
    A unified programming system for a multi-paradigm parallel architecture

    Real-time image understanding and image generation require very large amounts of computing power. A possible way to meet these requirements is to make use of the power available from parallel computing systems. However, parallel machines exhibit performance which is highly dependent on the algorithms being executed. Both image understanding and image generation involve a wide variety of algorithms, and a parallel machine suited to some of these algorithms may be unsuited to others. This thesis describes a novel heterogeneous parallel architecture optimised for image-based applications. It achieves its performance by combining two different forms of parallel architecture, namely fine-grain SIMD and coarse-grain MIMD, into a single architecture. In this way it is possible to match the most appropriate computing resource to each algorithm in a given application. As important as the architecture itself is a method for programming it. This thesis describes a novel multi-paradigm programming language based on C++, which allows programs that make use of both control and data parallelism to be expressed in a single coherent framework based on object-oriented programming. To demonstrate the utility of both the architecture and the programming system, two applications are examined, one from the field of image understanding and the other from image generation. These applications combine novel algorithms with novel implementation approaches to provide the most effective mapping onto this architecture.
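
    The abstract does not give the syntax of the thesis's C++-based language, so nothing below reproduces it. The sketch is a loose illustration in standard C++17 of the idea it describes: fine-grain data parallelism (SIMD-style, one operation over every pixel) and coarse-grain control parallelism (MIMD-style, independent tasks) expressed within a single object-oriented framework. The Image and TaskPool classes, and all names in them, are hypothetical.

        // Illustrative sketch only, not the thesis's actual language.
        // Parallel execution policies need a C++17 toolchain (on GCC, typically with TBB).
        #include <algorithm>
        #include <cstddef>
        #include <cstdint>
        #include <execution>
        #include <future>
        #include <utility>
        #include <vector>

        // Data-parallel object: one operation applied uniformly across an image (SIMD-style).
        class Image {
        public:
            explicit Image(std::size_t n) : pixels_(n, 0) {}
            void threshold(std::uint8_t t) {
                std::for_each(std::execution::par_unseq, pixels_.begin(), pixels_.end(),
                              [t](std::uint8_t& p) { p = p > t ? 255 : 0; });  // fine-grain, per-element
            }
        private:
            std::vector<std::uint8_t> pixels_;
        };

        // Control-parallel object: independent coarse-grain tasks run concurrently (MIMD-style).
        class TaskPool {
        public:
            template <class F>
            void spawn(F&& f) { tasks_.push_back(std::async(std::launch::async, std::forward<F>(f))); }
            void wait() { for (auto& t : tasks_) t.get(); }
        private:
            std::vector<std::future<void>> tasks_;
        };

        int main() {
            Image left(1 << 20), right(1 << 20);
            TaskPool pool;
            // Two coarse-grain tasks run side by side; each is internally data-parallel.
            pool.spawn([&] { left.threshold(128); });
            pool.spawn([&] { right.threshold(128); });
            pool.wait();
        }

    On the heterogeneous architecture the thesis describes, the data-parallel member function would be the natural candidate for the SIMD array and the spawned tasks for the MIMD processors; in this host-only sketch both simply map to CPU threads.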

    Shifts in the size and distribution of marine trophy fishing world records

    The extensive nature of recreational angling makes it difficult to explore trends in global catches. However, trophy fishing world records may provide an insight into recreational fishing pressure on the largest species and size classes. Trophy fishing is promoted and recorded by the International Game Fish Association (IGFA), which manages an 80-year database on the largest individuals of each species caught, called all-tackle records (ATRs), with information on the size and location of each record catch. We analyse these data to explore temporal trends in the size of record-setting fishes, determine how past and present ATR catches are distributed globally, and examine trends in records for International Union for Conservation of Nature (IUCN) threatened species. The number of ATRs, and the number of species awarded an ATR, have increased significantly over the past 80 years. New records are for increasingly smaller maximum-sized species of fish, with the average record size shifting from 167.7 kg in the 1950s to 8.1 kg in the 2010s. ATRs for species listed as threatened (Vulnerable or higher) on the IUCN Red List have also declined by approximately 66% over the past two decades. Records were unevenly distributed around the world but have spread globally over time. Historically, ATRs were concentrated around the coastline of the USA, but in recent decades more were reported in areas such as Japan and New Zealand. These data either reflect a shift away from mainly targeting large taxa towards targeting a wider variety of smaller species, or indicate that larger specimens are now limited and so fewer ATRs are being set. Additionally, the scarcity of new records for threatened species appears to support IUCN assessments of their poor stock status. The spread of ATRs suggests growing pressure on the largest size classes in regions with previously little trophy fishing pressure. We encourage greater use of catch-and-release initiatives and mandatory data collection for all near-record catches to better quantify trophy fishing pressure and ensure sustainable practices.
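
    The decade-binned trend described above (mean record weight falling from 167.7 kg in the 1950s to 8.1 kg in the 2010s) comes from grouping record catches by the decade in which they were set. The minimal sketch below shows that binning step only; the Record structure and the sample values are hypothetical stand-ins for the IGFA all-tackle record data, which are not reproduced here.

        // Illustrative sketch of decade-binned mean record weight, not the authors' analysis code.
        #include <iostream>
        #include <map>
        #include <utility>
        #include <vector>

        struct Record {
            int year;        // year the all-tackle record (ATR) was set (hypothetical sample data)
            double weightKg; // weight of the record fish
        };

        int main() {
            std::vector<Record> atrs = {
                {1954, 167.7}, {1961, 120.0}, {1987, 45.3}, {2012, 8.1}, {2016, 5.6}};

            // Group records by decade and accumulate weight totals and counts.
            std::map<int, std::pair<double, int>> byDecade;
            for (const auto& r : atrs) {
                auto& [sum, n] = byDecade[(r.year / 10) * 10];
                sum += r.weightKg;
                ++n;
            }

            // Mean record weight per decade.
            for (const auto& [decade, acc] : byDecade)
                std::cout << decade << "s: " << acc.first / acc.second << " kg\n";
        }

    The same grouping, keyed by country or IUCN threat category instead of decade, would yield the spatial and threat-status summaries the study reports.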

    The utility of low-density genotyping for imputation in the Thoroughbred horse

    BACKGROUND: Despite the dramatic reduction in the cost of high-density genotyping that has occurred over the last decade, it remains one of the limiting factors for obtaining the large datasets required for genomic studies of disease in the horse. In this study, we investigated the potential for low-density genotyping and subsequent imputation to address this problem. RESULTS: Using the haplotype phasing and imputation program BEAGLE, it is possible to impute genotypes from low- to high-density (50K) in the Thoroughbred horse with reasonable to high accuracy. Analysis of the sources of variation in imputation accuracy revealed dependence both on the minor allele frequency of the single nucleotide polymorphisms (SNPs) being imputed and on the underlying linkage disequilibrium structure. Whereas equidistant spacing of the SNPs on the low-density panel worked well, optimising SNP selection to increase their minor allele frequency was advantageous, even when the panel was subsequently used in a population of different geographical origin. Replacing base-pair position with linkage disequilibrium map distance reduced the variation in imputation accuracy across SNPs. Whereas a 1K SNP panel was generally sufficient to ensure that more than 80% of genotypes were correctly imputed, other studies suggest that a 2K to 3K panel is more efficient to minimize the subsequent loss of accuracy in genomic prediction analyses. The relationship between accuracy and genotyping costs for the different low-density panels suggests that a 2K SNP panel would represent good value for money. CONCLUSIONS: Low-density genotyping with a 2K SNP panel followed by imputation provides a compromise between cost and accuracy that could promote more widespread genotyping, and hence the use of genomic information, in horses. In addition to offering a low-cost alternative to high-density genotyping, imputation provides a means to combine datasets from different genotyping platforms, which is becoming necessary since researchers are starting to use the recently developed equine 70K SNP chip. However, more work is needed to evaluate the impact of between-breed differences on imputation accuracy.
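
    BEAGLE itself is a standalone Java program and is not shown here. The sketch below illustrates only the accuracy assessment the study describes: comparing genotypes imputed from a low-density panel with the true 50K genotypes, SNP by SNP, and relating the resulting concordance to minor allele frequency (MAF). The 0/1/2 genotype coding, the function names, and the toy data are assumptions made for illustration.

        // Illustrative sketch of imputation accuracy assessment, not the study's code.
        #include <cstdio>
        #include <vector>

        // Per-SNP concordance: fraction of animals whose imputed genotype equals the true genotype.
        double concordance(const std::vector<int>& truth, const std::vector<int>& imputed) {
            int correct = 0;
            for (std::size_t i = 0; i < truth.size(); ++i)
                if (truth[i] == imputed[i]) ++correct;
            return static_cast<double>(correct) / truth.size();
        }

        // Minor allele frequency from genotypes coded as 0/1/2 copies of one allele.
        double maf(const std::vector<int>& g) {
            double sum = 0;
            for (int x : g) sum += x;
            double p = sum / (2.0 * g.size());
            return p < 0.5 ? p : 1.0 - p;
        }

        int main() {
            // One toy SNP: true 50K genotypes vs. genotypes imputed from a low-density panel.
            std::vector<int> truth   = {0, 1, 2, 1, 0, 1, 2, 0};
            std::vector<int> imputed = {0, 1, 2, 0, 0, 1, 2, 0};
            std::printf("MAF = %.2f, concordance = %.2f\n", maf(truth), concordance(truth, imputed));
        }

    Averaging such per-SNP concordances within MAF bins is one way the dependence of imputation accuracy on MAF reported above would become visible.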

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.